How to set UserDefaults with XCUITest - iOS

I am doing iOS UI testing with XCUITest.
Since the UI test does not have direct access to the app's process, how do we set UserDefaults for the app?

You can pass all the required data using launch arguments.
Please read the documentation:
https://developer.apple.com/documentation/xctest/xcuiapplication/1500477-launcharguments
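For a concrete idea, here is a minimal sketch of the launch-arguments approach (the "hasSeenOnboarding" key, the "-uiTesting" flag, and the "Home" identifier are placeholders, not anything your app necessarily has). Arguments of the form "-key value" are picked up by the launched app's UserDefaults argument domain, so simple values need no extra parsing code:

import XCTest

final class SeededDefaultsUITests: XCTestCase {
    func testAppLaunchesWithSeededDefault() {
        let app = XCUIApplication()
        // "-key value" pairs land in the UserDefaults argument domain of the
        // launched app; "-uiTesting" is a plain flag the app can check via
        // CommandLine.arguments if it needs to branch on test mode.
        app.launchArguments += ["-hasSeenOnboarding", "YES", "-uiTesting"]
        app.launch()

        // Placeholder assertion: the onboarding screen should be skipped.
        XCTAssertTrue(app.otherElements["Home"].waitForExistence(timeout: 5))
    }
}

In the app, UserDefaults.standard.bool(forKey: "hasSeenOnboarding") then reads the seeded value.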
The other (and a bit slower) option is to use deep links.

This sounds much more complex than it is, but a technique that has worked for me is to set up an HTTP server in the testing suite that you can use to fetch mock data in your test code. I have had success with Embassy and Ambassador.
So you'd pass in a launch argument telling your app code to fetch from the server. For the case of UserDefaults, a helper class for making these specific requests to the local endpoint works well. This unfortunately means your app code has to do some setup for testing, but depending on your needs it could be a good compromise.
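As a rough sketch of the app-side helper described here (the endpoint URL, port, and launch-argument name are all assumptions; the server itself would be something like Embassy/Ambassador running in the test bundle):

import Foundation

// Hypothetical helper the app calls early at launch during UI tests.
enum TestDefaultsLoader {
    static func seedDefaultsIfNeeded() {
        guard CommandLine.arguments.contains("-loadDefaultsFromTestServer"),
              let url = URL(string: "http://localhost:8080/user-defaults") else { return }

        // A blocking fetch keeps the sketch short; a real app might do this
        // asynchronously before presenting its first screen.
        let semaphore = DispatchSemaphore(value: 0)
        URLSession.shared.dataTask(with: url) { data, _, _ in
            defer { semaphore.signal() }
            guard let data = data,
                  let values = try? JSONSerialization.jsonObject(with: data) as? [String: Any]
            else { return }
            for (key, value) in values {
                UserDefaults.standard.set(value, forKey: key)
            }
        }.resume()
        semaphore.wait()
    }
}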

Another possible solution to crossing the process boundary:
If you are not doing on device testing, you can access "SIMULATOR_SHARED_RESOURCES_DIRECTORY" and provide the data in a file for your test to consume.
On a real device this would be more difficult because you would need to use a shared Group container.
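A sketch of the shared-directory idea, assuming a simulator run (the file name is arbitrary; the same environment variable should be visible to both the test runner and the simulated app):

import Foundation

// Test side: write seed data where the simulated app can find it.
func writeSeedFile() throws {
    guard let shared = ProcessInfo.processInfo.environment["SIMULATOR_SHARED_RESOURCES_DIRECTORY"] else {
        return // not running in the simulator
    }
    let url = URL(fileURLWithPath: shared).appendingPathComponent("ui-test-seed.json")
    let seed: [String: Any] = ["hasSeenOnboarding": true]
    try JSONSerialization.data(withJSONObject: seed).write(to: url)
}

// App side: read the same file at launch, if it exists.
func readSeedFileIfPresent() -> [String: Any]? {
    guard let shared = ProcessInfo.processInfo.environment["SIMULATOR_SHARED_RESOURCES_DIRECTORY"],
          let data = try? Data(contentsOf: URL(fileURLWithPath: shared)
                                    .appendingPathComponent("ui-test-seed.json"))
    else { return nil }
    return (try? JSONSerialization.jsonObject(with: data)) as? [String: Any]
}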

You probably can use "Application Data Package". It's when you save your app state into a container and then run tests with the saved environment.
There are many guides on how to do it; here is just one of them:
https://www.codementor.io/paulzabelin/xcode-app-data-suni6p4ma
It might be more powerful and complicated than you need, but it opens up a big world of possibilities ;)

Related

XCUITest: How to jump into app code? How to modify the state of the app under test?

Coming from an Android/Espresso background, I am still struggling with XCUITest and UI testing for iOS.
My question is about two related but distinct issues:
How to compile and link against sources of the app under test?
How to jump into methods of the app under test and modify its state at runtime?
To tackle these questions, we should first understand the differences between Xcode's "unit test targets" and "UI test targets".
XCUITests run inside a completely separate process and cannot jump into methods of the app under test. Moreover, by default, XCUITests are not linked against any sources of the app under test.
In contrast, Xcode's unit tests are linked against the app sources. There is also the option to do "@testable imports". In theory, this means that unit tests can jump into arbitrary app code. However, unit tests do not run against the actual app. Instead, unit tests run against a stripped-down version of the iOS SDK without any UI.
Now there are different workarounds for these constraints:
Add some selected source files to a UI test target. This does not make it possible to call into the app, but it at least allows sharing selected code between the app and the UI tests.
Pass launch arguments via CommandLine.arguments from the UI tests to the app under test. This makes it possible to apply test-specific configurations to the app under test. However, those launch arguments need to be parsed and interpreted by the app, which pollutes the app with testing code. Moreover, launch arguments are only a non-interactive way to change the behavior of the app under test.
Implement a "debug UI" that is only accessible for XCUITest. Again, this has the drawback of polluting app code.
This leads to my concluding questions:
Which alternative methods exist to make XCUI tests more powerful/dynamic/flexible?
Can I compile and link UI tests against the entire app source and all pod dependencies, instead of only a few selected files?
Is it possible to gain the power of Android's instrumented tests + Espresso, where we can perform arbitrary state modifications on the app under test?
Why We Need This
In response to #theMikeSwan, I would like to clarify my stance on UI test architecture.
UI Tests should not need to link to app code; they are designed to
simulate a user tapping away inside your app. If you were to jump into
the app code during these tests you would no longer be testing what
your app does in the real world; you would be testing what it does
when manipulated in a way no user ever could. UI tests should not have
any need of any app code any more than a user does.
I agree that manipulating the app in such a way is an anti-pattern that should be only used in rare situations.
However, I have a very different stance on what should be possible.
In my view, the right approach for UI tests is not black-box testing but gray-box testing. Although we want UI tests to be as black-boxy as possible, there are situations where we want to dig deep into implementation details of the app under test.
Just to give you a few examples:
Extensibility: No UI testing framework can provide an API for each and every use case. Project requirements are different and there are times where we want to write our own function to modify the app state.
Internal state assertions: I want to be able to write custom assertions for the state of the app (assertions that do not only rely on the UI). In my current Android project, we had a notoriously broken subsystem. I asserted this subsystem with custom methods to guard against regression bugs.
Shared mock objects: In my current Android project, we have custom hardware that is not available for UI tests. We replaced this hardware with mock objects. We run assertions on those mock objects right from the UI tests. These assertions work seamlessly via shared memory. Moreover, I do not want to pollute the app code with all the mock implementations.
Keep test data outside: In my current Android project, we load test data from JUnit right into the app. With XCUITest's command line arguments, this would be way more limited.
Custom synchronization mechanisms: In my current Android project, we have wrapper classes around multithreading infrastructure to synchronize our UI tests with background tasks. This synchronization is hard to achieve without shared memory (e.g. Espresso IdlingResources).
Trivial code sharing: In my current iOS project, I share a simple definition file for the aforementioned launch arguments. This makes it possible to pass launch arguments in a type-safe way without duplicating string literals (see the sketch after this list). Although this is a minor use case, it still shows that selected code sharing can be valuable.
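A minimal sketch of such a shared definition file (the argument names are examples; the file is simply compiled into both the app target and the UI test target):

// LaunchArguments.swift - added to both the app target and the UI test target.
enum LaunchArgument: String {
    case uiTesting = "-uiTesting"
    case useMockBackend = "-useMockBackend"

    // App side: check whether the argument was passed at launch.
    var isPresent: Bool {
        CommandLine.arguments.contains(rawValue)
    }
}

In the test, app.launchArguments.append(LaunchArgument.useMockBackend.rawValue) then refers to the same constant instead of a duplicated string literal.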
For UI tests you shouldn't have to pollute your app code too much. You
could use a single command line argument to indicate UI tests are
running and use that to load up some test data, login a test user, or
pick a testing endpoint for network calls. With good architecture you
will only need to make the adjustment once when the app first launches
with the rest of your code oblivious that it is using test data (much
like if you have a development environment and a production
environment that you switch between for network calls).
This is exactly the thing that I am doing in my current iOS project, and this is exactly the thing that I want to avoid.
Although a good architecture can avoid too much havoc, it is still a pollution of the app code. Moreover, this does not solve any of the use cases that I highlighted above.
By proposing such a solution, you essentially admit that radical black-box testing is inferior to gray-box testing.
As in many parts of life, a differentiated view is better than a radical "use only the tools that we give you, you should not need to do this".
UI Tests should not need to link to app code; they are designed to simulate a user tapping away inside your app. If you were to jump into the app code during these tests you would no longer be testing what your app does in the real world; you would be testing what it does when manipulated in a way no user ever could. UI tests should not have any need of any app code any more than a user does.
For unit tests and integration tests of course you use @testable import … to get access to any methods and properties that are not marked private or fileprivate. Anything marked private or fileprivate will still be inaccessible from test code, but everything else, including internal, will be accessible. These are the tests where you should intentionally throw in data that can't possibly occur in the real world to make sure your code can handle it. These tests should still not reach into a method and make any changes, or the test won't really be testing how the code behaves.
You can create as many unit test targets as you want in a project and you can use one or more of those targets to hold integration tests rather than unit tests. You can then specify which targets run at various times so that your slower integration tests don't run every time you test and slow you down.
The environment unit and integration tests run in actually has everything. You can create an instance of a view controller and call loadViewIfNeeded() to have the entire view setup. You can then test for the existence of various outlets and trigger them to send actions (Check out UIControl's sendActions(for: ) method). Provided you have setup the necessary mocks this will let you verify that when a user taps button A, a call gets sent to the proper method of thing B.
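For illustration, a sketch of such a unit test (the view controller, outlet, and mock service names are hypothetical):

import XCTest
import UIKit
@testable import MyApp // module name is an assumption

final class LoginViewControllerTests: XCTestCase {
    func testTappingLoginCallsService() {
        let mockService = MockLoginService()              // hypothetical mock
        let sut = LoginViewController(service: mockService)

        sut.loadViewIfNeeded()                            // sets up the entire view
        sut.loginButton.sendActions(for: .touchUpInside)  // simulate the tap

        XCTAssertTrue(mockService.loginWasCalled)
    }
}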
For UI tests you shouldn't have to pollute your app code too much. You could use a single command line argument to indicate UI tests are running and use that to load up some test data, login a test user, or pick a testing endpoint for network calls. With good architecture you will only need to make the adjustment once when the app first launches with the rest of your code oblivious that it is using test data (much like if you have a development environment and a production environment that you switch between for network calls).
If you want to learn more about testing Swift Paul Hudson has a very good book you can check out https://www.hackingwithswift.com/store/testing-swift. It has plenty of examples of the various kinds of tests and good advice on how to split them up.
Update based on your edits and comments:
It looks like what you really want is Integration Tests. These are easy to miss in the world of Xcode as they don't have their own kind of target to create. They use the Unit Test target but test multiple things working together.
Provided you haven't added private or fileprivate to any of your outlets, you can create tests in a Unit Test target that make sure the outlets exist and then inject text or trigger their actions as needed to simulate a user navigating through your app.
Normally this kind of testing would just go from one view controller to a second one to test that the right view controller gets created when an action happens, but nothing says it can't go further.
You won't get images of the screen for a failed test like you do with UI Tests, and if you use storyboards make sure to instantiate your view controllers from the storyboard. Be sure that you are grabbing any navigation controllers and such that are required.
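A sketch of the storyboard setup this describes (the storyboard name and view controller types are assumptions):

import XCTest
import UIKit
@testable import MyApp // module name is an assumption

final class NavigationIntegrationTests: XCTestCase {
    func testInitialSceneIsEmbeddedInNavigationController() {
        let storyboard = UIStoryboard(name: "Main", bundle: Bundle(for: ListViewController.self))

        // Grab the navigation controller, not just the root view controller.
        let nav = storyboard.instantiateInitialViewController() as? UINavigationController
        XCTAssertNotNil(nav)

        let list = nav?.topViewController as? ListViewController
        XCTAssertNotNil(list)
        list?.loadViewIfNeeded()
    }
}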
This methodology will allow you to act like you are navigating through the app while being able to manipulate whatever data you need as it goes into various methods.
If you have a method with 10 lines in it and you want to tweak the data between lines 7 and 8 you would need to have an external call to something mockable and make your changes there or use a breakpoint with a debugger command that makes the change. This breakpoint trick is very useful for debugging things but I don't think I would use it for tests since deleting the breakpoint would break the test.
I had to do this for a specific app. What we did was create a kind of debug menu accessible only during UI tests (using a launch argument to make it available) and displayed with a certain gesture (two taps with two fingers in our case).
This debug menu is just a pop-up view appearing over all screens. In this view, we add buttons which allow us to update the state of the app.
You can then use XCUITest to display this menu and interact with buttons.
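A rough sketch of how such a menu could be wired up (the launch-argument name and the overlay itself are assumptions):

import UIKit

final class DebugMenu: NSObject {
    static let shared = DebugMenu()

    func installIfRequested(on window: UIWindow) {
        guard CommandLine.arguments.contains("-enableDebugMenu") else { return }

        // Two taps with two fingers, as described above.
        let gesture = UITapGestureRecognizer(target: self, action: #selector(show))
        gesture.numberOfTapsRequired = 2
        gesture.numberOfTouchesRequired = 2
        window.addGestureRecognizer(gesture)
    }

    @objc private func show() {
        // Present a pop-up view over the current screen with buttons that
        // mutate app state; give each button an accessibilityIdentifier so
        // XCUITest can find and tap it.
    }
}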
I've come across this same problem OP. Coming from the Android ecosystem and attempting to leverage solutions for iOS will have you banging your head as to why Apple does things this way. It makes things difficult.
In our case we replicated a network mocking solution for iOS that allows us to control the app state using static response files. However, using a standalone proxy to do this makes running XCUITest difficult on physical devices. Foundation's URLSession provides features that let you do the same thing from inside the configured session objects (see the protocolClasses property on URLSessionConfiguration and custom URLProtocol subclasses).
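A minimal sketch of that URLProtocol approach (the launch-argument name and the stubbed payload are placeholders; a real setup would look responses up from the static files mentioned above):

import Foundation

final class MockURLProtocol: URLProtocol {
    override class func canInit(with request: URLRequest) -> Bool { true }
    override class func canonicalRequest(for request: URLRequest) -> URLRequest { request }

    override func startLoading() {
        // In a real setup, pick a static response file based on the request.
        let data = Data("{\"stubbed\": true}".utf8)
        let response = HTTPURLResponse(url: request.url!, statusCode: 200,
                                       httpVersion: nil, headerFields: nil)!
        client?.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
        client?.urlProtocol(self, didLoad: data)
        client?.urlProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

// The app opts into the mock only when a UI test asks for it.
func makeSession() -> URLSession {
    let configuration = URLSessionConfiguration.default
    if CommandLine.arguments.contains("-useStubbedNetwork") {
        configuration.protocolClasses = [MockURLProtocol.self]
    }
    return URLSession(configuration: configuration)
}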
Now our UI tests have an inter-process communication problem, since the runner app is in its own process. Previously, the proxy and the UI tests lived in the same process, so it was easy to control the responses returned for certain requests.
This can be done using some odd bridges for inter-process communication, like CFMessagePort and some others (see NSHipster's article on inter-process communication).
Hope this helps.
How exactly does the unit test environment work?
The unit tests are in a bundle which is injected into the app.
Actually several bundles are injected. The overarching XCTest bundle is a framework called XCTest.framework, and you can actually see it inside the built app.
Your tests are a bundle too, with an .xctest suffix, and you can see that in the built app as well.
Okay, let's say you ask for one or more tests to run.
The app is compiled and runs in the normal way on the simulator or the device: for example, if there is a root view controller hierarchy, it is assembled normally, with all launch-time events firing the way they usually do (for instance, viewDidLoad, viewDidAppear, etc.).
At last the launch mechanism takes its hands off. The test runner is willing to wait quite a long time for this moment to arrive. When the test runner sees that this moment has arrived, it executes the test bundle's executable and the tests begin to run. The test code is able to see the main bundle code because it has imported the main bundle as a module, so it is linked to it.
When the tests are finished, the app is abruptly torn down.
So what about UI tests?
UI tests are completely different.
Nothing is injected into your app.
What runs initially is a special separate test runner app, whose bundle identifier is named after your test suite with Runner appended; for example, com.apple.test.MyUITests-Runner. (You may even be able to see the test runner launch.)
The test runner app in turn backgrounds itself and launches your app in its own special environment and drives it from the outside using the Accessibility framework to "tap" buttons and "see" the interface. It has no access to your app's code; it is completely outside your app, "looking" at its interface and only its interface by means of Accessibility.
We use GCDWebServer to communicate between the test and the app under test.
The test asks the app under test to start this local server, and then the test can talk to the app using this server. You can make requests on this server to fetch data from the app, as well as to tell the app to modify some behavior by providing data.
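A hedged sketch of the app-side setup (route, port, and the launch-argument name are assumptions; the exact GCDWebServer calls may differ slightly between versions):

import GCDWebServer

enum UITestServer {
    static let server = GCDWebServer()

    static func startIfRequested() {
        guard CommandLine.arguments.contains("-startTestServer") else { return }

        // Let the test read a piece of app state as JSON.
        server.addHandler(forMethod: "GET", path: "/state",
                          request: GCDWebServerRequest.self) { _ in
            GCDWebServerDataResponse(jsonObject: ["loggedIn": true])
        }

        _ = server.start(withPort: 8080, bonjourName: nil)
    }
}

The test side then just uses URLSession against http://localhost:8080/state to read or push data.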

Asynchronous UI Testing in Xcode With Swift

I am writing an app that makes plenty of network requests. As usual they are async, i.e. the call of the request method returns immediately and the result is delivered via a delegate method or in a closure after some delay.
Now on my registration screen I send a registration request to my backend and want to verify that the success UI is shown when the request finishes.
Which options are out there to wait for the request to finish, verify the success UI, and only after that leave the test method?
Also, are there any more clever options than waiting for the request to finish?
Thanks in advance!
Trivial Approach
Apple implemented major improvements in Xcode 9 / iOS 11 that enable you to wait for the appearance of a UI element. You can use the following one-liner:
<#yourElement#>.waitForExistence(timeout: 5)
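In context, a registration test could look like this (element identifiers are placeholders):

import XCTest

final class RegistrationUITests: XCTestCase {
    func testSuccessUIAppearsAfterRegistering() {
        let app = XCUIApplication()
        app.launch()

        app.buttons["Register"].tap()

        // Waits up to 5 seconds for the success label to appear.
        XCTAssertTrue(app.staticTexts["Registration successful"].waitForExistence(timeout: 5))
    }
}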
Advanced Approach
In general UI and unit tests (referred to as tests here) must run as fast as possible so the developer can run them often and does not get frustrated by the need to run a slow test suite multiple times a day. In some cases, there is the possibility that an (internal or security-related) app accesses an API that can only be accessed from certain networks / IP ranges / hosts. Also, most CI services offer pretty bad hardware and limited internet-connection speed.
For all of those reasons, it is recommended to implement tests in a way that they make no real network requests. Instead, they run with fake data, so-called fixtures. A clever developer implements this test suite in such a way that the source of the data can be switched with something simple like a Boolean property. Additionally, when the switch is set to fetch real backend data, the fixtures can be refreshed/recorded from the backend automatically. This way it is pretty easy to update the fake data and quickly detect changes to the API.
But the main advantage of this approach is speed. Your tests will not make real network requests but instead run against local data, which makes them independent of:
server issues
connection speed
network restrictions
This way you can run your tests very fast and thus much more often - which is a good way of writing code ("Test Driven Development").
On the other hand, you won't detect server changes immediately anymore, since the fake data won't change when the backend data changes. But this is solved by simply refreshing your fixtures using the switch you implemented (because you are a smart developer), which makes this issue a story you can tell your children!
But wait, I forgot something! Why is this a replacement for the trivial approach above, you ask? Simple! Since you use local data which is available immediately, you can also call the completion handler immediately. So there is no delay between making the request and verifying your success UI. This means you don't need to wait, which makes your tests even faster!
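A sketch of that switchable data source (all names are invented; the point is a protocol plus a fixture-backed implementation whose completion fires immediately):

import Foundation

protocol RegistrationService {
    func register(email: String, completion: @escaping (Result<Void, Error>) -> Void)
}

struct LiveRegistrationService: RegistrationService {
    func register(email: String, completion: @escaping (Result<Void, Error>) -> Void) {
        // The real URLSession request against the backend would go here.
    }
}

struct FixtureRegistrationService: RegistrationService {
    func register(email: String, completion: @escaping (Result<Void, Error>) -> Void) {
        // Local data is available immediately, so the success UI can be
        // verified without waiting.
        completion(.success(()))
    }
}

enum ServiceFactory {
    static func makeRegistrationService() -> RegistrationService {
        // The "switch": a launch argument (or a Boolean flag) picks fixtures vs. live.
        if CommandLine.arguments.contains("-useFixtures") {
            return FixtureRegistrationService()
        }
        return LiveRegistrationService()
    }
}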
I hope this helps some of my fellows out there. If you need more guidance regarding this topic don't hesitate and reply to this post.
Cya!

Xcode UI Testing Processes

I'm new to UI Testing and am struggling to find any documentation. Could anyone explain the relationship between the different processes that are running while conducting UI testing? From what I have researched, there is one process running the target application, and another running the test code. How do the two interact?
The process running the test code has access only to the UI hierarchy of the target application (unless you're doing sneaky signal passing) and cannot access or modify the data or app logic. The UI hierarchy is queried using titles, labels, accessibilityIdentifiers, or accessibilityLabels somewhat interchangeably, with a CSS-selector-like query syntax.
For documentation, there isn't really any of substance from Apple; I'd recommend taking a look at Joe Masilotti's "UI Testing in Xcode 7": http://masilotti.com/ui-testing-xcode-7/
I can't leave comments, but a note for when you are UI testing your app: if you use environment variables, you will need to pass the environment variables set for your test into your app instance. This one-liner helped me out a lot:
app.launchEnvironment = ProcessInfo.processInfo.environment
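For context, that line goes wherever you configure the app before launching it, e.g. in setUp (a sketch, assuming a standard XCUIApplication setup):

import XCTest

final class EnvironmentForwardingUITests: XCTestCase {
    override func setUpWithError() throws {
        continueAfterFailure = false
        let app = XCUIApplication()
        // Forward the test runner's environment variables to the app under
        // test; it would not see them otherwise.
        app.launchEnvironment = ProcessInfo.processInfo.environment
        app.launch()
    }
}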

Unit Testing with Kiwi, Core Data and Magical Record

I'm having issues using a 'fake' store for my Unit Tests.
I have installed Kiwi by adding its framework folder to my project and replacing Xcode's default test cases with Kiwi tests. These all run fine.
Since I'm using Core Data, I need to create a 'fake' store so that I'm not playing with the real database. I used http://www.cimgf.com/2012/05/15/unit-testing-with-core-data/ as my basic guide to do this.
However, since Xcode's default test implementation runs tests after launching the app, my '[MagicalRecord setupCoreDataStackWithStoreNamed:@"Store.sqlite"]' is still fired inside the App Delegate before any of the tests run.
By the time the tests try to use '[MagicalRecord setupCoreDataStackWithInMemoryStore]', this SQLite store is already set up, and so the in-memory store doesn't get set up (AFAIK): the aforementioned setup method first checks whether a stack already exists and just returns without doing anything if it does. So I end up with the SQLite database still.
As far as I can tell, this leaves me with the following options:
Put some environment variables or flags in for the test cases, and check for these in the app delegate, creating the appropriate store depending on this variable (i.e. tweaking my actual code for the sake of testing - not pretty, nor recommended by any practising TDD/BDDers).
Add managed context properties on all my controllers so I can manually specify the store to use (removing a great deal of the niceties of the MagicalRecord singleton access pattern).
Play (carefully) with my actual database (I'm not really willing to even contemplate this).
None of these seems to be a particularly good solution, so I'm hoping someone can see a better solution that I've stupidly overlooked.
Your tests should not be launching the app delegate. Try setting up your tests so that only the tests set up the in-memory Core Data store, as suggested in the article you reference.
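Not Kiwi- or MagicalRecord-specific, but as a plain Swift/XCTest sketch of the same idea (MagicalRecord's setupCoreDataStackWithInMemoryStore plays the role of the in-memory store description here; the model name is an assumption):

import XCTest
import CoreData

final class InMemoryStoreTests: XCTestCase {
    var container: NSPersistentContainer!

    override func setUpWithError() throws {
        container = NSPersistentContainer(name: "Model")
        let description = NSPersistentStoreDescription()
        description.type = NSInMemoryStoreType
        container.persistentStoreDescriptions = [description]

        let loaded = expectation(description: "store loaded")
        container.loadPersistentStores { _, error in
            XCTAssertNil(error)
            loaded.fulfill()
        }
        wait(for: [loaded], timeout: 1)
    }
}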

Recommendations to test API request layer in iOS apps using NSOperations and Coredata

I develop an iOS app that uses a REST API. The iOS app requests data in worker threads and stores the parsed results in core data. All views use core data to visualize the information. The REST API changes rapidly and I have no real control over the interface.
I am looking for advice on how to perform integration tests for the app as easily as possible. Should I test against the API or against mock data? And how do I mock GET requests properly if you can create resources with POST or modify them with PUT?
What frameworks do you use for these kinds of problems? I played with Frank, which looks nice but is complicated due to rapid UI changes in the iOS app. How would you test the "API request layer" in the app? Worker threads are NSOperations in a queue - everything is built asynchronously. Any recommendations?
I would strongly advise you to mock the server. Servers go down, the behavior changes, and if a test failure implies "maybe my code still works", you have a problem on your hands, because your test doesn't tell you whether or not the code is broken, which is the whole point.
As for how to mock the server, for a unit test that does this:
first_results = list_things()
delete_first_thing()
results_after_delete = list_things()
I have a mock data structure that looks like this:
{ list_things_request : [first_results, results_after_delete],
delete_thing_request: [delete_thing_response] }
It's keyed on your request, and the value is an array of responses for that request in the order that they were seen. Thus you can support repeatedly running the same request (like listing the things) and getting a different result. I use this format because in my situation it is possible for my API calls to run in a slightly different order than it did last time. If your tests are simpler, you might be able to get away with a simple list of request/response pairs.
I have a flag in my unit tests that indicates if I am in "record" mode (that is, talking to a real server and recording this data structure to disk) or if I am in "playback" mode (talking to the data structure). When I need to work with a test, I "record" the interactions with the server and then play them back.
I use the little-known SenTestCaseDidStartNotification to track which unit test is running and isolate my data files appropriately.
The other thing to keep in mind is that instability is the root of all evil. If you have code that does things with sets, or gets the current date, and such, this tends to change the requests and responses, which do not work in an offline scenario. So be careful with those.
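A sketch of that keyed request-to-ordered-responses structure (type and key names are invented):

import Foundation

final class ResponsePlayer {
    // e.g. ["list_things_request": [firstResults, resultsAfterDelete],
    //       "delete_thing_request": [deleteThingResponse]]
    private var responsesByRequest: [String: [Data]]

    init(responsesByRequest: [String: [Data]]) {
        self.responsesByRequest = responsesByRequest
    }

    // Returns the next recorded response for this request key, in the order
    // the responses were originally seen.
    func nextResponse(for requestKey: String) -> Data? {
        guard var queue = responsesByRequest[requestKey], !queue.isEmpty else { return nil }
        let next = queue.removeFirst()
        responsesByRequest[requestKey] = queue
        return next
    }
}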
(Since nobody stepped in yet and gave you a complete walkthrough) My humble advice: step back a bit, take out the magic of async, regard everything as sync (API calls, parsing, persistence), and isolate each step as a consumer/producer. After all, you don't want to unit-test NSURLConnection, or JSONKit, or whatever (they should have been tested if you use them); you want to test YOUR code. Your code takes some input and produces output, unaware of the fact that the input was in fact the output generated in a background thread somewhere. You can do all of the isolated tests synchronously.
Can we agree on the fact that your Views don't care about how their model data was provided? If yes, well, test your View with mock objects.
Can we agree on the fact that your parser doesn't care about how the data was provided? If yes, well, test your parser with mock data.
Network layer: the same applies as described above; in the end you'll get an NSDictionary of headers and some NSData or NSString of content. I don't think you want to unit-test NSURLConnection or any third-party networking API you trust (ASIHTTPRequest, AFNetworking, ...?), so in the end, what's to be tested?
You can mock up URLs, request headers and POST data for each use-case you have, and setup test cases for expected responses.
In the end, IMHO, it's all about "normalizing" out async.
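As one small example of such a synchronous, isolated test (the parser type and its JSON shape are hypothetical):

import XCTest
@testable import MyApp // module and parser names are assumptions

final class ThingParserTests: XCTestCase {
    func testParsesListOfThings() throws {
        // The parser never learns that this JSON came from a fixture rather
        // than from a background network request.
        let fixture = Data("""
        [{"id": 1, "name": "First"}, {"id": 2, "name": "Second"}]
        """.utf8)

        let things = try ThingParser().parse(fixture)

        XCTAssertEqual(things.count, 2)
        XCTAssertEqual(things.first?.name, "First")
    }
}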
Take a look at Nocilla
For more info, check this other answer to a similar question
